
    A Note on Hardness of Diameter Approximation

    We revisit the hardness of approximating the diameter of a network. In the CONGEST model of distributed computing, $\tilde{\Omega}(n)$ rounds are necessary to compute the diameter [Frischknecht et al. SODA'12], where $\tilde{\Omega}(\cdot)$ hides polylogarithmic factors. Abboud et al. [DISC 2016] extended this result to sparse graphs and, at a more fine-grained level, showed that, for any integer $1 \leq \ell \leq \operatorname{polylog}(n)$, distinguishing between networks of diameter $4\ell + 2$ and $6\ell + 1$ requires $\tilde{\Omega}(n)$ rounds. We slightly tighten this result by showing that even distinguishing between diameter $2\ell + 1$ and $3\ell + 1$ requires $\tilde{\Omega}(n)$ rounds. The reduction of Abboud et al. is inspired by recent conditional lower bounds in the RAM model, where the orthogonal vectors problem plays a pivotal role. In our new lower bound, we make the connection to orthogonal vectors explicit, leading to a conceptually more streamlined exposition. Comment: Accepted to Information Processing Letters.
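    Since the orthogonal vectors problem is the stated pivot of the reduction, a minimal sketch of the problem itself may help readers unfamiliar with it. The quadratic-time brute force below is only an illustration of the problem definition, not the reduction from the paper; the function name and the toy instance are made up.

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Orthogonal Vectors: given two sets A, B of d-dimensional 0/1 vectors,
    decide whether some a in A and b in B satisfy <a, b> = 0.
    This is the naive O(|A| * |B| * d) check; the conditional lower bounds
    referenced above concern ruling out truly subquadratic algorithms."""
    return any(
        all(x * y == 0 for x, y in zip(a, b))
        for a, b in product(A, B)
    )

# Tiny example: the pair ((1, 0, 1), (0, 1, 0)) is orthogonal.
A = [(1, 0, 1), (1, 1, 0)]
B = [(0, 1, 0), (1, 1, 1)]
print(has_orthogonal_pair(A, B))  # True
```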

    Decremental Single-Source Shortest Paths on Undirected Graphs in Near-Linear Total Update Time

    In the decremental single-source shortest paths (SSSP) problem we want to maintain the distances between a given source node $s$ and every other node in an $n$-node $m$-edge graph $G$ undergoing edge deletions. While its static counterpart can be solved in near-linear time, this decremental problem is much more challenging even in the undirected unweighted case. In this case, the classic $O(mn)$ total update time of Even and Shiloach [JACM 1981] has been the fastest known algorithm for three decades. At the cost of a $(1+\epsilon)$-approximation factor, the running time was recently improved to $n^{2+o(1)}$ by Bernstein and Roditty [SODA 2011]. In this paper, we bring the running time down to near-linear: We give a $(1+\epsilon)$-approximation algorithm with $m^{1+o(1)}$ expected total update time, thus obtaining near-linear time. Moreover, we obtain $m^{1+o(1)} \log W$ time for the weighted case, where the edge weights are integers from $1$ to $W$. The only prior work on weighted graphs in $o(mn)$ time is the $m n^{0.9+o(1)}$-time algorithm by Henzinger et al. [STOC 2014, ICALP 2015], which works for directed graphs with quasi-polynomial edge weights. The expected running time bound of our algorithm holds against an oblivious adversary. In contrast to the previous results, which rely on maintaining a sparse emulator, our algorithm relies on maintaining a so-called sparse $(h, \epsilon)$-hop set introduced by Cohen [JACM 2000] in the PRAM literature. An $(h, \epsilon)$-hop set of a graph $G=(V, E)$ is a set $F$ of weighted edges such that the distance between any pair of nodes in $G$ can be $(1+\epsilon)$-approximated by their $h$-hop distance (given by a path containing at most $h$ edges) on $G'=(V, E \cup F)$. Our algorithm can maintain an $(n^{o(1)}, \epsilon)$-hop set of near-linear size in near-linear time under edge deletions. Comment: Accepted to Journal of the ACM. A preliminary version of this paper was presented at the 55th IEEE Symposium on Foundations of Computer Science (FOCS 2014). Abstract shortened to respect the arXiv limit of 1920 characters.
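    To make the $(h, \epsilon)$-hop set definition concrete, the sketch below computes $h$-hop-limited distances with a truncated Bellman-Ford; on $G' = (V, E \cup F)$ these $h$-hop distances should $(1+\epsilon)$-approximate the true distances in $G$. The graph representation, function name, and toy hop-set edge are illustrative assumptions, not the paper's data structures.

```python
import math

def h_hop_distances(n, edges, source, h):
    """Bellman-Ford truncated to h rounds: dist[v] is the length of the
    shortest path from source to v that uses at most h edges ("hops")."""
    dist = [math.inf] * n
    dist[source] = 0.0
    for _ in range(h):
        new_dist = dist[:]
        for u, v, w in edges:          # undirected: relax both directions
            if dist[u] + w < new_dist[v]:
                new_dist[v] = dist[u] + w
            if dist[v] + w < new_dist[u]:
                new_dist[u] = dist[v] + w
        dist = new_dist
    return dist

# Toy check of the hop-set property: G is a unit-weight path 0-1-2-3,
# and F adds a single (hypothetical) weighted shortcut edge (0, 3).
G_edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
F_edges = [(0, 3, 3.0)]
exact = h_hop_distances(4, G_edges, 0, h=3)             # true distances in G
approx = h_hop_distances(4, G_edges + F_edges, 0, h=1)  # 1-hop distances in G'
print(exact[3], approx[3])  # 3.0 3.0 -- within the (1 + eps) requirement
```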

    Fully dynamic all-pairs shortest paths with worst-case update-time revisited

    We revisit the classic problem of dynamically maintaining shortest paths between all pairs of nodes of a directed weighted graph. The allowed updates are insertions and deletions of nodes and their incident edges. We give worst-case guarantees on the time needed to process a single update (in contrast to related results, the update time is not amortized over a sequence of updates). Our main result is a simple randomized algorithm that for any parameter $c > 1$ has a worst-case update time of $O(c n^{2+2/3} \log^{4/3} n)$ and answers distance queries correctly with probability $1 - 1/n^c$, against an adaptive online adversary if the graph contains no negative cycle. The best deterministic algorithm is by Thorup [STOC 2005] with a worst-case update time of $\tilde{O}(n^{2+3/4})$ and assumes non-negative weights. This is the first improvement for this problem for more than a decade. Conceptually, our algorithm shows that randomization along with a more direct approach can provide better bounds. Comment: To be presented at the Symposium on Discrete Algorithms (SODA) 2017.
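    As a quick sanity check on the stated tradeoff (and not part of the algorithm itself), the snippet below evaluates how the parameter c scales the $O(c n^{2+2/3} \log^{4/3} n)$ update bound against the $1 - 1/n^c$ correctness probability; the leading constant is a hypothetical placeholder.

```python
import math

def worst_case_update_bound(n, c, constant=1.0):
    """Evaluate the stated O(c * n^{2 + 2/3} * log^{4/3} n) update bound,
    up to a hypothetical leading constant, and the failure probability 1/n^c."""
    bound = constant * c * n ** (2 + 2 / 3) * math.log(n) ** (4 / 3)
    failure = n ** (-c)
    return bound, failure

# Larger c buys a smaller failure probability at a linear cost in update time.
for c in (2, 3, 4):
    ops, fail = worst_case_update_bound(n=10_000, c=c)
    print(f"c={c}: ~{ops:.2e} operations per update, failure prob {fail:.0e}")
```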

    Fully Dynamic Spanners with Worst-Case Update Time

    An alpha-spanner of a graph G is a subgraph H such that H preserves all distances of G within a factor of alpha. In this paper, we give fully dynamic algorithms for maintaining a spanner H of a graph G undergoing edge insertions and deletions with worst-case guarantees on the running time after each update. In particular, our algorithms maintain: - a 3-spanner with ~O(n^{1+1/2}) edges with worst-case update time ~O(n^{3/4}), or - a 5-spanner with ~O(n^{1+1/3}) edges with worst-case update time ~O(n^{5/9}). These size/stretch tradeoffs are best possible (up to logarithmic factors). They can be extended to the weighted setting at very minor cost. Our algorithms are randomized and correct with high probability against an oblivious adversary. We also further extend our techniques to construct a 5-spanner with suboptimal size/stretch tradeoff, but improved worst-case update time. To the best of our knowledge, these are the first dynamic spanner algorithms with sublinear worst-case update time guarantees. Since it is known how to maintain a spanner using small amortized but large worst-case update time [Baswana et al. SODA'08], obtaining algorithms with strong worst-case bounds, as presented in this paper, seems to be the next natural step for this problem.
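    The stretch guarantee in the definition above can be checked directly on small unweighted graphs. The brute-force verifier below is a minimal sketch of that definition only; it has nothing to do with the dynamic maintenance algorithms, and the function names and toy instance are made up.

```python
from collections import deque

def bfs_distances(n, adj, source):
    """Unweighted single-source distances via BFS; unreachable nodes get None."""
    dist = [None] * n
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_alpha_spanner(n, g_edges, h_edges, alpha):
    """Check the definition: dist_H(u, v) <= alpha * dist_G(u, v) for all pairs."""
    def adjacency(edges):
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        return adj

    adj_g, adj_h = adjacency(g_edges), adjacency(h_edges)
    for s in range(n):
        dg, dh = bfs_distances(n, adj_g, s), bfs_distances(n, adj_h, s)
        for v in range(n):
            if dg[v] is not None and (dh[v] is None or dh[v] > alpha * dg[v]):
                return False
    return True

# Toy example: dropping one edge of a 4-cycle leaves a path, a 3-spanner of the cycle.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
path = [(0, 1), (1, 2), (2, 3)]
print(is_alpha_spanner(4, cycle, path, alpha=3))  # True
```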

    Brief Announcement: A Note on Hardness of Diameter Approximation

    We revisit the hardness of approximating the diameter of a network. In the CONGEST model, ~Omega(n) rounds are necessary to compute the diameter [Frischknecht et al. SODA'12]. Abboud et al. [DISC 2016] extended this result to sparse graphs and, at a more fine-grained level, showed that, for any integer 1 <= l <= polylog(n), distinguishing between networks of diameter 4l + 2 and 6l + 1 requires ~Omega(n) rounds. We slightly tighten this result by showing that even distinguishing between diameter 2l + 1 and 3l + 1 requires ~Omega(n) rounds. The reduction of Abboud et al. is inspired by recent conditional lower bounds in the RAM model, where the orthogonal vectors problem plays a pivotal role. In our new lower bound, we make the connection to orthogonal vectors explicit, leading to a conceptually more streamlined exposition. This is suited for teaching both the lower bound in the CONGEST model and the conditional lower bound in the RAM model.

    Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs

    We study the problem of finding the cycle of minimum cost-to-time ratio in a directed graph with n nodes and m edges. This problem has a long history in combinatorial optimization and has recently seen interesting applications in the context of quantitative verification. We focus on strongly polynomial algorithms to cover the use case where the weights are relatively large compared to the size of the graph. Our main result is an algorithm with running time ~O(m^{3/4} n^{3/2}), which gives the first improvement over Megiddo's ~O(n^3) algorithm [JACM'83] for sparse graphs. (We use the notation ~O(.) to hide factors that are polylogarithmic in n.) We further demonstrate how to obtain both an algorithm with running time n^3/2^{Omega(sqrt(log n))} on general graphs and an algorithm with running time ~O(n) on constant-treewidth graphs. To obtain our main result, we develop a parallel algorithm for negative cycle detection and single-source shortest paths that might be of independent interest.
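    The link between the ratio objective and the negative cycle detection mentioned above is the classical one: a cycle of cost-to-time ratio below lambda exists if and only if the graph reweighted with cost - lambda * time contains a negative cycle. The binary-search sketch below uses plain sequential Bellman-Ford and is only meant to illustrate that connection, not the strongly polynomial or parallel algorithms of the paper; the names and the toy instance are assumptions.

```python
def has_negative_cycle(n, edges):
    """Standard Bellman-Ford negative-cycle detection on weighted arcs (u, v, w)."""
    dist = [0.0] * n                     # virtual source connected to all nodes
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return any(dist[u] + w < dist[v] - 1e-12 for u, v, w in edges)

def min_ratio_cycle_value(n, edges, iterations=60):
    """Binary search for the minimum cost/time ratio lambda* over all cycles.
    edges: list of (u, v, cost, time) with time > 0. A cycle of ratio < lam
    exists iff the graph with weights cost - lam * time has a negative cycle."""
    lo = min(c / t for _, _, c, t in edges)   # every cycle ratio lies between
    hi = max(c / t for _, _, c, t in edges)   # the min and max edge ratio
    for _ in range(iterations):
        mid = (lo + hi) / 2
        reweighted = [(u, v, c - mid * t) for u, v, c, t in edges]
        if has_negative_cycle(n, reweighted):
            hi = mid                     # a cycle of ratio below mid exists
        else:
            lo = mid
    return hi

# Toy instance: two directed triangles through node 0, with ratios 2.0 and 1.5.
edges = [(0, 1, 2, 1), (1, 2, 2, 1), (2, 0, 2, 1),
         (0, 3, 3, 2), (3, 4, 3, 2), (4, 0, 3, 2)]
print(round(min_ratio_cycle_value(5, edges), 3))  # ~1.5
```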

    Near-Optimal Approximate Shortest Paths and Transshipment in Distributed and Streaming Models

    We present a method for solving the shortest transshipment problem - also known as uncapacitated minimum cost flow - up to a multiplicative error of (1 + epsilon) in undirected graphs with non-negative integer edge weights using a tailored gradient descent algorithm. Our gradient descent algorithm takes epsilon^(-3) polylog(n) iterations, and in each iteration it needs to solve an instance of the transshipment problem up to a multiplicative error of polylog(n), where n is the number of nodes. In particular, this allows us to perform a single iteration by computing a solution on a sparse spanner of logarithmic stretch. Using a careful white-box analysis, we can further extend the method to finding approximate solutions for the single-source shortest paths (SSSP) problem. As a consequence, we improve prior work by obtaining the following results: (1) Broadcast CONGEST model: (1 + epsilon)-approximate SSSP using ~O((sqrt(n) + D) epsilon^(-O(1))) rounds, where D is the (hop) diameter of the network. (2) Broadcast congested clique model: (1 + epsilon)-approximate shortest transshipment and SSSP using ~O(epsilon^(-O(1))) rounds. (3) Multipass streaming model: (1 + epsilon)-approximate shortest transshipment and SSSP using ~O(n) space and ~O(epsilon^(-O(1))) passes. The previously fastest SSSP algorithms for these models leverage sparse hop sets. We bypass the hop set construction; computing a spanner is sufficient with our method. The above bounds assume non-negative integer edge weights that are polynomially bounded in n; for general non-negative weights, running times scale with the logarithm of the maximum ratio between non-zero weights. In the case of asymmetric costs for traversing an edge in opposite directions, running times scale with the maximum ratio between the costs of both directions over all edges.
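    For readers less familiar with the shortest transshipment (uncapacitated minimum cost flow) objective, the tiny linear-programming formulation below, solved with scipy.optimize.linprog, spells out the problem being approximated; it says nothing about the gradient descent method itself, and the instance, demands, and function name are made up. With a single source/sink pair of one unit, the optimal value is simply the shortest path distance, which is the SSSP connection exploited above.

```python
import numpy as np
from scipy.optimize import linprog

def shortest_transshipment(n, edges, demand):
    """Uncapacitated min-cost flow on an undirected graph: each undirected edge
    {u, v} of weight w becomes two arcs of cost w, and we minimize total cost
    subject to flow conservation (inflow - outflow = demand at every node)."""
    arcs = [(u, v, w) for u, v, w in edges] + [(v, u, w) for u, v, w in edges]
    cost = np.array([w for _, _, w in arcs], dtype=float)
    A_eq = np.zeros((n, len(arcs)))
    for j, (u, v, _) in enumerate(arcs):
        A_eq[v, j] += 1.0   # arc enters v
        A_eq[u, j] -= 1.0   # arc leaves u
    result = linprog(cost, A_eq=A_eq, b_eq=np.asarray(demand, dtype=float),
                     bounds=(0, None), method="highs")
    return result.fun

# One unit from node 0 to node 2 on a triangle: the cheap two-edge route wins.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 3.0)]
print(shortest_transshipment(3, edges, demand=[-1.0, 0.0, 1.0]))  # 2.0
```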